8 research outputs found

    Multiprocessor fixed priority scheduling with limited preemptions

    Get PDF
    Challenges associated with allowing preemptions and migrations are compounded in multicore systems, particularly under global scheduling policies, because of the potentially high overheads. For example, multiple levels of cache greatly increase preemption- and migration-related overheads, as well as the difficulty involved in accurately accounting for them, leading to substantially inflated worst-case execution times (WCETs). Preemption- and migration-related overheads can be significantly reduced, both in number and in size, by using fixed preemption points in the tasks' code, thus dividing each task into a series of non-preemptive regions (NPRs). This introduces an additional consideration into the scheduling policy: when a high priority task is released and all of the processors are executing non-preemptive regions of lower priority tasks, a choice must be made about how to manage the next preemption. With an eager approach, the first lower priority task to reach a preemption point is preempted, even if it is not the lowest priority running task. Alternatively, with a lazy approach, preemption is delayed until the lowest priority currently running task reaches its next preemption point. In this paper, we show that under global fixed priority scheduling with eager preemptions, each task suffers at most a single priority inversion each time it resumes execution. Building on this observation, we derive a new response-time-based schedulability test for tasks with fixed preemption points. Experimental evaluations show that global fixed priority scheduling with eager preemptions is significantly more effective, in terms of task set schedulability, than with lazy preemption using link-based scheduling.
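    The eager/lazy distinction above comes down to which busy processor's job is preempted once the newly released high priority task has to run. A minimal Python sketch of the two victim-selection rules is given below; it is illustrative only (the job representation and function name are assumptions, not the authors' implementation), and it assumes the dispatcher is invoked whenever some running job reaches a preemption point, with a larger numeric value denoting a lower priority.

    def choose_preemption_victim(running, released_priority, eager=True):
        # running: one (priority, at_preemption_point) pair per busy processor;
        # a larger priority value means a lower priority job.
        lower = [i for i, (prio, _) in enumerate(running)
                 if prio > released_priority]
        if not lower:
            return None  # no lower priority job is running; the task keeps waiting
        if eager:
            # Eager: preempt a lower priority job that is at a preemption point
            # right now, even if it is not the lowest priority running job.
            at_point = [i for i in lower if running[i][1]]
            return at_point[0] if at_point else None
        # Lazy: only the lowest priority running job may be preempted, so the
        # released task waits until that particular job reaches a preemption point.
        lowest = max(lower, key=lambda i: running[i][0])
        return lowest if running[lowest][1] else None

    For example, with running = [(5, True), (9, False)] and a released task of priority 2, the eager rule preempts the priority-5 job immediately, whereas the lazy rule keeps waiting for the priority-9 job to reach its next preemption point.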

    Exact Speedup Factors and Sub-Optimality for Non-Preemptive Scheduling

    Get PDF
    Fixed priority scheduling is used in many real-time systems; however, both preemptive and non-preemptive variants (FP-P and FP-NP) are known to be sub-optimal when compared to an optimal uniprocessor scheduling algorithm such as preemptive earliest deadline first (EDF-P). In this paper, we investigate the sub-optimality of fixed priority non-preemptive scheduling. Specifically, we derive the exact processor speed-up factor required to guarantee the feasibility under FP-NP (i.e., schedulability assuming an optimal priority assignment) of any task set that is feasible under EDF-P. As a consequence of this work, we also derive a lower bound on the sub-optimality of non-preemptive EDF (EDF-NP). As this lower bound matches a recently published upper bound for the same quantity, it establishes the exact sub-optimality of EDF-NP. It is known that neither preemptive nor non-preemptive fixed priority scheduling dominates the other; in other words, there are task sets that are feasible on a processor of unit speed under FP-P that are not feasible under FP-NP, and vice versa. Hence, comparing these two algorithms, there are non-trivial speed-up factors in both directions. We derive the exact speed-up factor required to guarantee the FP-NP feasibility of any FP-P feasible task set. Further, we derive the exact speed-up factor required to guarantee FP-P feasibility of any constrained-deadline FP-NP feasible task set.
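    As a reading aid, the speed-up factor used in this abstract can be interpreted operationally: executing on a speed-S processor is modelled by dividing every worst-case execution time by S. The sketch below is an illustration of that notion only, not the paper's derivation; is_schedulable is a placeholder for whichever FP-NP schedulability test (with optimal priority assignment) one plugs in, and the binary search assumes the test is monotonic in processor speed.

    def scale_to_speed(tasks, speed):
        # tasks: list of (C, D, T) triples; a faster processor shrinks execution times.
        return [(C / speed, D, T) for (C, D, T) in tasks]

    def minimal_speedup(tasks, is_schedulable, eps=1e-4):
        # Smallest speed at which `is_schedulable` accepts the scaled task set.
        lo, hi = 1.0, 2.0
        if is_schedulable(scale_to_speed(tasks, lo)):
            return lo
        while not is_schedulable(scale_to_speed(tasks, hi)):
            hi *= 2.0
        while hi - lo > eps:
            mid = (lo + hi) / 2.0
            if is_schedulable(scale_to_speed(tasks, mid)):
                hi = mid
            else:
                lo = mid
        return hi

    # Toy usage with a utilisation-based stand-in test (NOT an FP-NP test):
    demo = [(3.0, 4.0, 4.0), (3.0, 6.0, 6.0)]
    print(minimal_speedup(demo, lambda ts: sum(C / T for (C, _, T) in ts) <= 1.0))

    The exact speed-up factor reported in the paper is the worst case of this quantity over all EDF-P feasible task sets, with an exact FP-NP test in place of the stand-in above.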

    Resource Augmentation for Performance Guarantees in Embedded Real-time Systems

    No full text
    Real-time scheduling policies have been widely studied, with many known schedulability and feasibility analysis techniques for different task models that have advanced the state of the art. Most of these techniques are derived under the assumption of negligible runtime overheads, which may not be realistic for modern embedded real-time systems and hence potentially compromises the guarantees on their correct behavior. This calls for methods to reason about the functioning of the system in the presence of such overheads, as well as to predictably control them. Controlling these overheads may place additional performance demands, consequently requiring more resources such as faster processors. At the same time, the need for energy efficiency in this class of systems further complicates the problem and necessitates a holistic approach. In this thesis, we apply resource augmentation, viz., processor speed-up, to guarantee desired real-time properties even in the presence of runtime overheads. We specifically consider preemptions and faults that, at runtime, manifest as overheads in the system in various ways. Our aim is to provide specified non-preemption and fault tolerance feasibility guarantees in a real-time system. We first propose offline and online methods, using CPU frequency scaling, to control the number of preemptions in periodic and sporadic task systems under a preemptive Fixed Priority Scheduling (FPS) policy. Furthermore, we derive the resource augmentation bound, specifically the upper bound on the lowest processor speed, that guarantees the feasibility of a specified non-preemption behavior for any real-time task. We show that, for any task Ti, the resource augmentation bound that guarantees a non-preemptive execution of a specified duration Li is given by 4Li/Dmin, where Dmin is the shortest deadline in the task set. Consequently, we show that the upper bound on the lowest processor speed that guarantees the feasibility of a non-preemptive schedule for the task set is 4Cmax/Dmin, where Cmax is the largest execution time in the task set. We then propose a method to guarantee specified upper bounds on the preemption related overheads in the schedule. We first translate the requirement of meeting specified upper bounds on the preemption related overheads into a set of non-preemption requirements for the task set. The resource augmentation bound, in conjunction with a sensitivity analysis, is used to calculate the optimal processor speed that guarantees the derived non-preemption requirements, achieving the specified bounds on the preemption related costs. Finally, we derive the resource augmentation bound that guarantees the fault tolerance feasibility of a set of real-time tasks under an error burst of known length. We show that if the error burst length is no longer than half the shortest deadline in the task set, the resource augmentation bound that guarantees fault tolerance feasibility is 6. Our contribution bounds the extra resources, specifically the required processor speed-up, that provide specified non-preemption and fault tolerance feasibility guarantees in a real-time system. It allows us to quantify the 'goodness' of non-preemptive scheduling, referred to as its sub-optimality, as compared to an optimal uniprocessor scheduling algorithm, in terms of the required processor speed-up that guarantees a non-preemptive schedule for any uniprocessor feasible task set. We intend to extend this work to provide non-preemption and fault tolerance feasibility guarantees in multiprocessor systems.
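    For concreteness, the closed-form bounds quoted in this abstract can be evaluated directly from a task set description. The sketch below merely restates those formulas in Python; the (C, D, T) task representation and the function names are assumptions made here for illustration, and the constant 6 is only claimed under the stated error-burst condition.

    def npr_speedup_bound(L_i, tasks):
        # Upper bound on the lowest speed that guarantees task Ti a
        # non-preemptive execution of duration L_i:  4 * L_i / D_min.
        D_min = min(D for (_C, D, _T) in tasks)
        return 4.0 * L_i / D_min

    def non_preemptive_speedup_bound(tasks):
        # Upper bound on the lowest speed that guarantees a fully
        # non-preemptive schedule of the task set:  4 * C_max / D_min.
        C_max = max(C for (C, _D, _T) in tasks)
        D_min = min(D for (_C, D, _T) in tasks)
        return 4.0 * C_max / D_min

    def fault_tolerance_speedup_bound(burst_length, tasks):
        # The bound of 6 is stated for error bursts no longer than half
        # the shortest deadline in the task set.
        D_min = min(D for (_C, D, _T) in tasks)
        if burst_length > D_min / 2.0:
            raise ValueError("bound stated only for bursts <= D_min / 2")
        return 6.0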

    Limited Preemptive Scheduling in Real-time Systems

    No full text
    Preemptive and non-preemptive scheduling paradigms typically introduce undesirable side effects when scheduling real-time tasks, mainly in the form of preemption overheads and blocking, that potentially compromise timeliness guarantees. The high preemption overheads in preemptive real-time scheduling may imply high resource utilization, often requiring significant over-provisioning, e.g., pessimistic Worst Case Execution Time (WCET) approximations. Non-preemptive scheduling, on the other hand, can be infeasible even for tasksets with very low utilization, due to the blocking imposed on higher priority tasks, e.g., when one or more tasks have WCETs greater than the shortest deadline. Limited preemptive scheduling facilitates the reduction of both preemption related overheads and blocking by deferring preemptions to favorable locations in the task code. In this thesis, we investigate the feasibility of limited preemptive scheduling of real-time tasks on uniprocessor and multiprocessor platforms. We derive schedulability tests for global limited preemptive scheduling under both the Earliest Deadline First (EDF) and Fixed Priority Scheduling (FPS) paradigms. The tests are derived in the context of the two major mechanisms for enforcing limited preemptions, viz., deferring preemption for a specified duration (i.e., Floating Non-Preemptive Regions) and deferring preemption to the next specified location in the task code (i.e., Fixed Preemption Points). Moreover, two major preemption approaches are considered, viz., waiting for the lowest priority job to become preemptable (i.e., a Lazy Preemption Approach (LPA)) and preempting the first executing lower priority job that becomes preemptable (i.e., an Eager Preemption Approach (EPA)). Evaluations using synthetically generated tasksets indicate that adopting an eager preemption approach is beneficial in terms of schedulability in the context of global FPS. Further evaluations simulating different global limited preemptive scheduling algorithms expose runtime anomalies with respect to the observed number of preemptions, indicating that limited preemptive scheduling may not necessarily reduce the number of preemptions in multiprocessor systems. We then theoretically quantify the sub-optimality (the worst-case performance) of limited preemptive scheduling on uniprocessor and multiprocessor platforms using resource augmentation, e.g., processor speed-up factors required to achieve optimality. Finally, we propose a sensitivity analysis based methodology to control the preemptive behavior of real-time tasks using processor speed-up, in order to satisfy multiple preemption behavior related constraints. The results presented in this thesis facilitate the analysis of real-time tasks scheduled using limited preemptive scheduling on uniprocessor and multiprocessor platforms. The examining committee consists of Professor Giorgio Buttazzo, Sant’Anna School of Advanced Studies, Pisa; Professor Gerhard Fohler, Technical University Kaiserslautern; and Associate Professor Liliana Cucu-Grosjean, INRIA. Reserve: Associate Professor Damir Isovic, MDH.
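    To make the Fixed Preemption Points mechanism mentioned above concrete, the sketch below models each task as a list of non-preemptive region (NPR) lengths and computes the classical uniprocessor blocking bound, namely the longest NPR of any lower priority task. This is an illustration of the general limited preemptive model rather than code from the thesis, and it does not capture the global (multiprocessor) analyses the thesis derives.

    def blocking_bound(npr_lengths_by_task, i):
        # npr_lengths_by_task: one list of NPR lengths per task, ordered from
        # highest to lowest priority; under uniprocessor limited preemptive
        # fixed priority scheduling, task i is blocked for at most the
        # longest NPR of any lower priority task.
        lower_priority = npr_lengths_by_task[i + 1:]
        return max((max(nprs) for nprs in lower_priority), default=0.0)

    # Example: three tasks split into NPRs of the given lengths.
    tasks_nprs = [[1.0, 2.0], [3.0], [2.0, 2.0, 1.0]]
    print(blocking_bound(tasks_nprs, 0))   # -> 3.0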

    Ethics Aspects of Embedded and Cyber-Physical Systems

    No full text
    The growing complexity of software employed in the cyber-physical domain is calling for a thorough study of both its functional and extra-functional properties. Ethical aspects are among the important extra-functional properties, covering the whole life cycle of cyber-physical systems, with stages ranging from design, development, and deployment/production to use. One of the ethical challenges involved is the question of identifying the responsibilities of each stakeholder associated with the development and use of a cyber-physical system. This challenge is made even more pressing by the introduction of autonomous, increasingly intelligent systems that can perform functionalities without human intervention, because of the lack of experience, best practices, and policies for such technology. In this article, we provide a framework for responsibility attribution based on the amount of autonomy and automation involved in AI-based cyber-physical systems. Our approach enables traceability of anomalous behaviors back to the responsible agents, be they human or software, allowing us to identify and separate the "responsibility" of the decision-making software from human responsibility. This provides us with a framework to accommodate the ethical "responsibility" of the software for AI-based cyber-physical systems that will be deployed in the future, underscoring the role of ethics as an important extra-functional property. Finally, this systematic approach makes apparent the need for rigorous communication protocols between the different actors associated with the development and operation of cyber-physical systems, which further identifies the ethical challenges involved in the form of group responsibilities.

    Using Processor Speed-up to Control Preemption Related Costs

    No full text
    Preemptive real-time schedulers are associated with preemption related overheads, and their effects are challenging to analyze because they typically vary with the point of preemption (e.g., cache related preemption delays) and even with the state of the physical process that the real-time system is controlling. Moreover, preemptive scheduling typically requires the use of resource access protocols [1] to enable mutual exclusion in cases where tasks communicate through shared resources. These resource access protocols, though predictable, introduce schedulability overheads in the system and may lead to pessimistic assumptions in the schedulability analysis. Even though preemptive scheduling schemes are used in a large number of applications, mostly due to their ability to achieve high processor utilization, the detrimental impact of preemptions is widely recognized in the community. The applicability of a non-preemptive scheduling scheme, on the other hand, is limited to only a small fraction of the feasible task sets [8] due to its inability to fully utilize the computational resources in most cases. In order to take advantage of the benefits of both the preemptive and non-preemptive scheduling paradigms, various limited preemption scheduling models have been proposed, a detailed survey of which can be found in the literature. Of the several methods that have been proposed to reduce the number of preemptions in real-time scheduling, Preemption Threshold Scheduling (PTS) for FPS was first introduced in the ThreadX operating system by Lamie.
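    For readers unfamiliar with PTS, the rule it adds on top of fixed priority scheduling can be sketched as follows; this is an illustrative rendering of the standard mechanism, not code from this paper, and it assumes numerically smaller values denote higher priorities.

    def pts_can_preempt(ready_priority, running_threshold):
        # Under Preemption Threshold Scheduling each task has, besides its
        # priority, a preemption threshold at least as high as that priority.
        # A ready job preempts the running job only if its priority is higher
        # than the running job's preemption threshold.
        return ready_priority < running_threshold

    # A job of priority 2 cannot preempt a running job whose threshold is 1,
    # even if that running job's base priority is, say, 5.
    print(pts_can_preempt(2, 1))   # -> False
    print(pts_can_preempt(2, 4))   # -> True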

    Quantifying the Exact Sub-Optimality of Non-Preemptive Scheduling

    No full text
    Fixed priority scheduling is used in many real-time systems; however, both preemptive and non-preemptive variants (FP-P and FP-NP) are known to be sub-optimal when compared to an optimal uniprocessor scheduling algorithm such as preemptive Earliest Deadline First (EDF-P). In this paper, we investigate the sub-optimality of fixed priority non-preemptive scheduling. Specifically, we derive the exact processor speed-up factor required to guarantee the feasibility under FP-NP (i.e., schedulability assuming an optimal priority assignment) of any task set that is feasible under EDF-P. As a consequence of this work, we also derive a lower bound on the sub-optimality of non-preemptive EDF (EDF-NP), which, since it matches a recently published upper bound, gives the exact sub-optimality for EDF-NP. It is known that neither preemptive nor non-preemptive fixed priority scheduling dominates the other, i.e., there are task sets that are feasible on a processor of unit speed under FP-P that are not feasible under FP-NP, and vice versa. Hence, comparing these two algorithms, there are non-trivial speed-up factors in both directions. We derive the exact speed-up factor required to guarantee the FP-NP feasibility of any FP-P feasible task set. Further, we derive upper and lower bounds on the speed-up factor required to guarantee FP-P feasibility of any FP-NP feasible task set. Empirical evidence suggests that the lower bound may be tight, and hence may equate to the exact speed-up factor in this case.
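    One way to state the central quantity formally, consistent with the wording of this abstract (the notation S(A) is shorthand introduced here, not taken from the paper), is:

    S(A) = \inf \{\, S \ge 1 \;:\; \text{every task set feasible under EDF-P on a unit-speed processor is schedulable by } A \text{ on a speed-}S\text{ processor} \,\}

    with A standing for FP-NP (under an optimal priority assignment) or EDF-NP; the paper's exact speed-up factors are the values of S(A) for these choices of A.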